perm filename FIFE.ME1[LET,JMC] blob
sn#345029 filedate 1978-03-29 generic text, type C, neo UTF8
.require "let.pub[let,jmc]" source;
∂MEM Debby Fife, Managing Editor, %2The Stanford Magazine%1$$Dave Cudhea's article∞
As I said on the phone, the article is excellent - the best
I can remember. I especially like the low key and objective description
of the differing points of view about what is required for human-level
artificial intelligence.
Here is the elaboration I promised of the statement about the potential
danger of AI.
"When true high-level artificial intelligence approaches, and
we understand better what it is like, then we can %2decide%1 how to
get the benefits and protect ourselves from the dangers. Present views
of what human-level intelligence would be like have been formed by
legend and science fiction. There are two themes - the wishing cap
theme and the robot theme. In the wishing cap theme, a wish is granted
and interpreted literally (by the author) with unpleasant results. In
the robot theme, a robot is a kind of person - a world conqueror in
1930s fiction, an oppressed minority in the 1950s, an ordinary neurotic
in the 1960s, and even a dictatorial, all-powerful psychiatric social
worker. As soon as we get enough AI to help with the task,
we should use it to help depict, as clearly as possible, the human
consequences of the various policies that might be adopted towards
its use. However, it seems very unlikely that it will be to
our advantage to create science fiction type robots with purposes of
their own and to which it is reasonable to ascribe rights and
duties. For example, with humans, subgoals
often come to dominate the main goal that generated them, and a human's
attitude towards his long-term goals is affected by the momentary
chemical state of his blood stream.
An artificial intelligence probably should have no goals of its own."
These somewhat complicated considerations may be important
to the layman, since there is much talk these days of the dangers of
one or another scientific or technological development. I have some
fear that if nothing is said, the layman will assume that the AI
community has given the matter no thought.
Dave has done an excellent job of describing complicated
matters in layman's terms and he might try his hand at this. However,
if it comes out such a can of worms that he prefers his original
formulation, that is also ok with me.